Unsupervised Data Augmentation
Unsupervised Data Augmentation (UDA) makes use of both labeled and unlabeled data. The labeled examples are trained with a standard supervised loss, while the unlabeled examples contribute a consistency loss that encourages the model to make similar predictions on an example and on an augmented version of it.
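The combined objective can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function and argument names are hypothetical, the consistency term is written as a KL divergence between predictions on an unlabeled example and its augmented version, and `lam` stands in for the weight on the unsupervised term.

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def uda_loss(sup_logits, sup_labels, unsup_logits, unsup_aug_logits, lam=1.0):
    """Sketch of the UDA objective: supervised cross-entropy on labeled
    examples plus a KL consistency term on unlabeled examples."""
    # Supervised cross-entropy on the labeled minibatch.
    p_sup = softmax(sup_logits)
    ce = -np.log(p_sup[np.arange(len(sup_labels)), sup_labels]).mean()
    # Consistency term KL(p(y|x) || p(y|aug(x))) on the unlabeled minibatch;
    # in training, the target p(y|x) would be treated as a constant (no gradient).
    p = softmax(unsup_logits)
    q = softmax(unsup_aug_logits)
    kl = (p * (np.log(p) - np.log(q))).sum(axis=-1).mean()
    return ce + lam * kl
```

When the predictions on the original and augmented examples agree, the consistency term vanishes and only the supervised loss remains.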
Unsupervised Data Augmentation for Consistency Training
Semi-supervised learning has lately shown much promise in improving deep learning models when labeled data is scarce.
Augmentation Strategies for Different Tasks
UDA pairs the consistency objective with task-specific augmentations: RandAugment for image classification, and back-translation or TF-IDF-based word replacement for text classification.
Confidence-Based Masking
Specifically, in each minibatch, the consistency loss term is computed only on examples whose highest predicted class probability exceeds a threshold β, with β = 0.8 for CIFAR-10 and SVHN and β = 0.5 for ImageNet.
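The masking step can be sketched as below. The helper name and the per-example loss values are illustrative; only the thresholding rule and the β = 0.8 setting come from the text above.

```python
import numpy as np

def confidence_mask(probs, beta):
    """Keep only examples whose highest class probability exceeds
    the confidence threshold beta."""
    return probs.max(axis=-1) > beta

# Hypothetical per-example consistency losses and predicted probabilities.
per_example_kl = np.array([0.9, 0.1, 0.4])
probs = np.array([[0.9, 0.1], [0.55, 0.45], [0.85, 0.15]])

# Mask out low-confidence examples before averaging (beta = 0.8,
# the CIFAR-10/SVHN setting).
mask = confidence_mask(probs, beta=0.8)
masked_loss = (per_example_kl * mask).sum() / max(mask.sum(), 1)
```

Here the second example (maximum probability 0.55) is excluded, so only the confident predictions contribute to the consistency loss.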
Sharpening Predictions
Since regularizing predictions to have low entropy has been shown to be beneficial, the predictions are sharpened when computing the target distribution on unlabeled examples by using a low softmax temperature τ.
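Temperature sharpening amounts to dividing the logits by τ < 1 before the softmax, which makes the distribution more peaked. A minimal sketch; the τ = 0.4 value here is illustrative, not a setting from the text.

```python
import numpy as np

def sharpen(logits, tau):
    """Compute a softmax with temperature tau; tau < 1 lowers the
    entropy of the resulting distribution."""
    z = logits / tau
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

logits = np.array([[1.0, 0.5, 0.0]])
soft = sharpen(logits, tau=1.0)   # ordinary softmax
hard = sharpen(logits, tau=0.4)   # sharpened: more probability mass on the top class
```

Lowering τ concentrates mass on the highest-scoring class, giving the sharper target distribution used on unlabeled examples.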
Learning Materials
Unsupervised Data Augmentation for Consistency Training
Unsupervised Data Augmentation and its types